Uncertainty Attribution

Introduction

Uncertainty attribution is the process of identifying the sources of uncertainty in a model's predictions and quantifying their contributions. It involves identifying the input parameters or variables that contribute the most to the overall uncertainty in the model's output, and then determining how much each source contributes to the total uncertainty.

Uncertainty attribution is an important step in uncertainty quantification: it identifies the most important sources of uncertainty so that efforts to reduce or eliminate them can be prioritized, which is particularly useful when resources are limited and not all sources of uncertainty can be addressed equally. It also increases the interpretability and transparency of Bayesian deep learning models.

Our research focuses on two aspects of uncertainty attribution, answering the "where" and "what" questions. The "where" question is about identifying the locations in the input images, or the input features, that contribute most to the predictive uncertainty. The "what" question is about determining the factors or conditions, e.g., image noise, illumination, resolution, occlusion, and pose, that lead to high predictive uncertainty.

Recent Work

Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning

In this research, we propose a gradient-based uncertainty attribution method to identify the most problematic regions of the input, i.e., those that contribute most to the prediction uncertainty. The proposed method backpropagates the uncertainty score through the softmax probabilities, the logits, and the Bayesian neural network to obtain pixel-wise attributions.

Our method attributes the uncertainty efficiently, within a single backward pass of the neural network. The attribution satisfies the completeness property: the uncertainty is fully explained, decomposing into the sum of the individual pixel attributions.
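
As an illustration only (not the exact attribution rule from the paper), the sketch below shows the general pipeline under assumed names: predictive entropy is estimated from stochastic forward passes of a Bayesian network (e.g., with MC dropout kept active), and a single backward pass propagates it to a simple gradient-times-input pixel attribution.

# Illustrative sketch of gradient-based pixel-level uncertainty attribution.
import torch
import torch.nn.functional as F

def pixel_uncertainty_attribution(bayesian_net, x, n_samples=20):
    """x: image batch of shape (B, C, H, W); bayesian_net must be stochastic
    at inference time (e.g., MC dropout left active)."""
    x = x.detach().clone().requires_grad_(True)

    # Monte-Carlo estimate of the predictive distribution p(y | x).
    probs = torch.stack(
        [F.softmax(bayesian_net(x), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)

    # Predictive entropy as the scalar uncertainty score per example.
    uncertainty = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    # Single backward pass: propagate the uncertainty score through the
    # softmax probabilities, the logits, and the network down to the pixels.
    uncertainty.sum().backward()

    # Gradient * input as a simple per-pixel attribution map (B, H, W).
    return (x.grad * x).sum(dim=1)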

Gradient-based Uncertainty Reasoning for Classification Models

This research focuses on identifying the major input-data imperfections that contribute to the predictive uncertainty. Two major sources of data imperfection are data perturbation and data anomaly. The proposed method attributes the uncertainty score to the different perturbation and anomaly factors, revealing the specific reasons for the uncertainty.

To identify these essential factors, we leverage recent developments in disentangled representation learning to construct a variational autoencoder that learns interpretable latent representations corresponding to the perturbation factors.
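
The sketch below, using hypothetical encoder, decoder, and classifier handles, illustrates one way such factor-level attribution could be computed: the classifier's predictive entropy on the VAE reconstruction is differentiated with respect to the disentangled latent code, scoring each latent factor. The actual method in our work may differ in its attribution rule.

# Illustrative sketch of attributing uncertainty to disentangled latent factors.
import torch
import torch.nn.functional as F

def factor_uncertainty_attribution(vae_encoder, vae_decoder, classifier, x):
    # Latent code whose dimensions (ideally) align with perturbation factors
    # such as noise level, illumination, resolution, or occlusion.
    z = vae_encoder(x).detach().requires_grad_(True)

    x_rec = vae_decoder(z)
    probs = F.softmax(classifier(x_rec), dim=-1)
    uncertainty = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    # Gradient * latent code as a per-factor attribution score (B, latent_dim).
    grads = torch.autograd.grad(uncertainty.sum(), z)[0]
    return (grads * z).abs()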

Perturbation-based Uncertainty Attribution

Uncertainty Attribution (UA) in deep learning focuses on pinpointing the origins of uncertainty to improve model interpretability and decision-making reliability. Traditional gradient-based UA methods, favored for their simplicity, suffer from noisy gradients and cannot capture the large input changes that may be needed to explain the uncertainty. To overcome these limitations, we reformulate UA as an optimization problem and learn a binary mask that enables targeted uncertainty reduction via learnable perturbations. Incorporating insights from a pre-trained SAM model, our approach uses continuous optimization, leveraging a Bernoulli distribution and Gumbel-sigmoid reparameterization for mask learning. This not only addresses the shortcomings of gradient-based approaches by accommodating larger perturbations but also improves interpretability. Our experimental results highlight the method's effectiveness, showing its advantage over contemporary gradient-based UA techniques in accurately identifying and mitigating sources of uncertainty.
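
A minimal sketch of the mask-learning idea, under assumed names and hyperparameters (it omits, for instance, the SAM-based guidance): mask logits are relaxed with a Gumbel-sigmoid so a Bernoulli mask can be trained by gradient descent, and masked regions receive a learnable perturbation optimized to reduce the predictive uncertainty while keeping the mask sparse.

# Illustrative sketch of perturbation-based mask learning for UA.
import torch
import torch.nn.functional as F

def gumbel_sigmoid(logits, temperature=0.5):
    # Reparameterized sample from a relaxed Bernoulli (binary concrete).
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log1p(-u)
    return torch.sigmoid((logits + logistic_noise) / temperature)

def predictive_entropy(model, x, n_samples=10):
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def learn_mask(model, x, steps=200, lr=0.05, sparsity_weight=0.01):
    # Per-pixel mask logits (shared across channels) and a learnable perturbation.
    mask_logits = torch.zeros(x.shape[0], 1, *x.shape[2:],
                              device=x.device, requires_grad=True)
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits, delta], lr=lr)

    for _ in range(steps):
        mask = gumbel_sigmoid(mask_logits)             # soft binary mask
        x_edit = (1 - mask) * x + mask * (x + delta)   # perturb masked region
        loss = predictive_entropy(model, x_edit).mean() \
               + sparsity_weight * mask.mean()         # prefer small masks
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Hard mask marking the regions whose perturbation most reduces uncertainty.
    return torch.sigmoid(mask_logits) > 0.5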

Publications

  • Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, and Qiang Ji. Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. [PDF]

  • Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, and Qiang Ji. Semantic Attribution for Explainable Uncertainty Quantification. UAI Workshop on Epistemic Uncertainty in Artificial Intelligence, 2023.